Convergence and Error Bounds for Passive Stochastic Algorithms Using Vanishing Step Size
Authors
Abstract
Similar Resources
Error bounds for constant step-size Q-learning
We provide a bound on the first moment of the error in the Q-function estimate resulting from fixed step-size algorithms applied to finite state-space, discounted reward Markov decision problems. Motivated by Tsitsiklis’ proof for the decreasing step-size case, we decompose the Q-learning update equations into a dynamical system driven by a noise sequence and another dynamical system whose stat...
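As a hedged illustration of the setting described above (not the paper's error analysis), the sketch below runs tabular Q-learning with a fixed step size on a small synthetic finite MDP; the transition kernel P, reward table R, discount factor gamma, and step size alpha are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's construction): tabular Q-learning with a
# constant step size on a small, hypothetical finite MDP. P, R, gamma, and
# alpha below are illustrative assumptions.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
gamma, alpha = 0.9, 0.05                                           # discount factor, fixed step size

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s'] transition kernel
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))              # expected rewards

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(50_000):
    a = rng.integers(n_actions)                     # uniform exploration policy
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a] + 0.1 * rng.standard_normal()       # noisy reward observation
    # Fixed step-size Q-learning update; the temporal-difference term acts as
    # the noise sequence driving the error dynamics discussed in the abstract.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q)
```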
Convergence diagnostics for stochastic gradient descent with constant step size
Iterative procedures in stochastic optimization are typically comprised of a transient phase and a stationary phase. During the transient phase the procedure converges towards a region of interest, and during the stationary phase the procedure oscillates in a convergence region, commonly around a single point. In this paper, we develop a statistical diagnostic test to detect such phase transiti...
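One well-known diagnostic in this spirit, sketched below as an assumption rather than the paper's exact test, is Pflug's statistic: the running sum of inner products between successive stochastic gradients tends to stay positive during the transient phase and drifts negative once the iterates oscillate around the stationary region. The toy least-squares problem, step size, and burn-in length are illustrative choices.

```python
import numpy as np

# Hedged sketch: constant step-size SGD on a toy least-squares problem with a
# Pflug-type stationarity diagnostic (running sum of inner products of
# successive stochastic gradients). All problem parameters are illustrative.
rng = np.random.default_rng(1)
d, step = 10, 0.05
theta_star = rng.standard_normal(d)   # hypothetical ground-truth parameter
theta = np.zeros(d)

def stoch_grad(theta):
    # One-sample stochastic gradient of 0.5 * E[(x^T theta - y)^2], y = x^T theta* + noise.
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.5 * rng.standard_normal()
    return (x @ theta - y) * x

running_sum = 0.0
prev_grad = stoch_grad(theta)
theta -= step * prev_grad
for it in range(1, 20_000):
    g = stoch_grad(theta)
    running_sum += prev_grad @ g      # positive in the transient phase, negative near stationarity
    if running_sum < 0 and it > 100:  # 100-iteration burn-in (illustrative)
        print(f"diagnostic fired at iteration {it}")
        break
    theta -= step * g
    prev_grad = g
```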
Bounds for Small-Error and Zero-Error Quantum Algorithms
We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for qu...
Convergence rate analysis and error bounds for projection algorithms in convex feasibility problems
Amir Beck & Marc Teboulle (2003). Convergence rate analysis and error bounds for projection algorithms in convex feasibility problems. Optimization Methods and Software, 18(4), 377-394. DOI: 10.1080/10556780310001604977
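As a minimal, hedged sketch of the class of methods analysed there, the snippet below runs cyclic (alternating) projections between two convex sets with closed-form projections, a Euclidean ball and a halfspace; the particular sets and starting point are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: alternating projections for a convex feasibility problem with
# two sets that admit closed-form projections. The specific sets (a Euclidean
# ball and a halfspace) and the starting point are illustrative assumptions.
def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    # Projection onto {x : a^T x <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

center, radius = np.zeros(3), 1.0
a, b = np.array([1.0, 1.0, 1.0]), 0.5

x = np.array([5.0, -3.0, 2.0])
for _ in range(100):
    x = project_halfspace(project_ball(x, center, radius), a, b)

print("x =", x)
print("ball residual:", max(0.0, np.linalg.norm(x - center) - radius))
print("halfspace residual:", max(0.0, a @ x - b))
```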
Mean-square Convergence of Stochastic Multi-step Methods with Variable Step-size
We study mean-square consistency, stability in the mean-square sense, and mean-square convergence of drift-implicit linear multi-step methods with variable step-size for the approximation of the solution of Itô stochastic differential equations. We obtain conditions that depend on the step-size ratios and that ensure mean-square convergence for the special case of adaptive two-step Maru...
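The paper concerns multi-step schemes; as a simpler, hedged illustration of the two ingredients named in the abstract (a drift-implicit discretisation and a variable step size), the sketch below applies a one-step drift-implicit Euler-Maruyama rule with a non-uniform step sequence to a linear scalar Itô SDE, where the implicit update has a closed form. The coefficients and step-size range are illustrative assumptions.

```python
import numpy as np

# Hedged one-step illustration (the paper treats multi-step schemes): a
# drift-implicit Euler-Maruyama discretisation of the linear Itô SDE
#   dX_t = a * X_t dt + b dW_t
# with a variable (here randomly drawn) step-size sequence. The coefficients
# and the step-size distribution are illustrative assumptions.
rng = np.random.default_rng(2)
a, b = -2.0, 0.5          # drift and diffusion coefficients (mean-reverting)
T, x = 1.0, 1.0           # final time and initial condition

t = 0.0
while t < T:
    h = min(rng.uniform(1e-3, 1e-2), T - t)     # variable step size
    dW = np.sqrt(h) * rng.standard_normal()     # Brownian increment over the step
    # Drift-implicit step: x_{n+1} = x_n + a*h*x_{n+1} + b*dW,
    # solved in closed form because the drift is linear.
    x = (x + b * dW) / (1.0 - a * h)
    t += h

print("X(T) =", x)
```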
Journal
Journal title: Journal of Mathematical Analysis and Applications
Year: 1996
ISSN: 0022-247X
DOI: 10.1006/jmaa.1996.0217